32 research outputs found

    Silhouette-based gait recognition using Procrustes shape analysis and elliptic Fourier descriptors

    This paper presents a gait recognition method which combines spatio-temporal motion characteristics with statistical and physical parameters (referred to as STM-SPP) of a human subject, classifying the subject by analysing the shape of its silhouette contours using Procrustes shape analysis (PSA) and elliptic Fourier descriptors (EFDs). STM-SPP uses spatio-temporal gait characteristics and physical parameters of the human body to resolve similar dissimilarity scores between probe and gallery sequences obtained by PSA. A part-based shape analysis using EFDs is also introduced to achieve robustness against carrying conditions. The classification results from PSA and EFDs are combined, with ties in ranking resolved by contour matching based on Hu moments. Experimental results show that STM-SPP outperforms several silhouette-based gait recognition methods.
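    The Fourier-descriptor idea behind EFD-based contour analysis can be sketched in a few lines. This is a simplified complex Fourier descriptor rather than the paper's exact Kuhl-Giardina elliptic formulation; the function name and the toy circle contour are illustrative assumptions:

```python
import numpy as np

def fourier_descriptors(contour, n_harmonics=8):
    """Simplified contour Fourier descriptors.

    contour: (N, 2) array of (x, y) points tracing a closed boundary.
    Returns magnitudes of the first n_harmonics coefficients, made
    invariant to translation, scale, rotation, and starting point.
    """
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                            # drop DC term -> translation invariance
    mags = np.abs(coeffs[1:n_harmonics + 1]) # magnitudes -> rotation/start-point invariance
    return mags / mags[0]                    # normalise by first harmonic -> scale invariance

# Toy example: a circle sampled at 64 points, plus a scaled/translated copy
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
fd = fourier_descriptors(circle)
circle2 = 3.0 * circle + np.array([5.0, -2.0])
fd2 = fourier_descriptors(circle2)
```

    Because the descriptors are invariant to similarity transforms, `fd` and `fd2` match, which is the property that lets contour descriptors compare silhouettes across frames.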

    Uniscale and multiscale gait recognition in realistic scenario

    The performance of a gait recognition method is affected by numerous challenging factors that degrade its reliability as a behavioural biometric for subject identification in realistic scenarios. Thus, for effective visual surveillance, this thesis presents five gait recognition methods that address various challenging factors to reliably identify a subject in realistic scenarios with low computational complexity. It presents a gait recognition method that analyses the spatio-temporal motion of a subject with statistical and physical parameters using Procrustes shape analysis and elliptic Fourier descriptors (EFD). It introduces a part-based EFD analysis to achieve invariance to carrying conditions, and the use of physical parameters enables it to achieve invariance to across-day gait variation. Although the spatio-temporal deformation of a subject's shape in gait sequences provides better discriminative power than its kinematics, inclusion of dynamical motion characteristics improves the identification rate. Therefore, the thesis presents a gait recognition method which combines the spatio-temporal shape and dynamic motion characteristics of a subject to achieve robustness against the maximum number of challenging factors compared to related state-of-the-art methods. A region-based gait recognition method that analyses a subject's shape in image and feature spaces is presented to achieve invariance to clothing variation and carrying conditions. To take into account arbitrary moving directions of a subject in realistic scenarios, a gait recognition method must be robust against variation in view. Hence, the thesis presents a robust view-invariant multiscale gait recognition method. Finally, the thesis proposes a gait recognition method based on low spatial and low temporal resolution video sequences captured by a CCTV camera. The computational complexity of each method is analysed. Experimental analyses on public datasets demonstrate the efficacy of the proposed methods.
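    At the heart of the Procrustes shape analysis used in these methods is aligning two landmark configurations (removing translation, scale, and rotation) and measuring the residual. A minimal NumPy sketch with synthetic landmark data; the function name is ours, not the thesis's:

```python
import numpy as np

def procrustes_distance(X, Y):
    """Full Procrustes distance between two (N, 2) landmark sets.

    Both shapes are centred and scaled to unit norm; Y is then
    optimally rotated onto X via the orthogonal Procrustes solution
    before taking the residual Frobenius distance.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    U, s, Vt = np.linalg.svd(X.T @ Y)   # optimal rotation via SVD
    R = (U @ Vt).T
    return np.linalg.norm(X - Y @ R)

# Synthetic check: a rotated, scaled, translated copy has distance ~0
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 2))
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
Y = 2.5 * X @ Rot + np.array([1.0, -3.0])
d = procrustes_distance(X, Y)
```

    A near-zero distance for similarity-transformed copies is exactly what makes the measure usable as a shape dissimilarity score between probe and gallery silhouettes.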

    Clothing and carrying condition invariant gait recognition based on rotation forest

    This paper proposes a gait recognition method which is invariant to the maximum number of challenging factors of gait recognition, mainly unpredictable variation in clothing and carrying conditions. The method introduces an averaged gait key-phase image (AGKI), which is computed by averaging each of the five key-phases of the gait periods of a gait sequence. It analyses the AGKIs using high-pass and low-pass Gaussian filters, each at three cut-off frequencies, to achieve robustness against unpredictable variation in clothing and carrying conditions in addition to other covariate factors, e.g., walking speed, segmentation noise, shadows under feet, and changes in hair style and ground surface. The optimal cut-off frequencies of the Gaussian filters are determined based on an analysis of the focus values of the filtered subjects' silhouettes. The method applies rotation forest ensemble learning to enhance both individual accuracy and diversity within the ensemble for an improved identification rate. Extensive experiments on public datasets demonstrate the efficacy of the proposed method.
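    The multi-cut-off Gaussian filtering step can be sketched as follows. Low-pass responses keep the coarse body shape while high-pass residuals keep fine boundary detail; the specific sigmas and the toy square silhouette here are illustrative assumptions, not the paper's tuned cut-offs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_features(silhouette, sigmas=(1.0, 2.0, 4.0)):
    """Low-pass and high-pass responses of a silhouette image.

    For each cut-off (sigma), the low-pass response is a Gaussian blur
    and the high-pass response is the residual (image minus blur).
    """
    img = silhouette.astype(float)
    feats = []
    for s in sigmas:
        low = gaussian_filter(img, sigma=s)   # low-pass: coarse shape
        high = img - low                      # high-pass: fine boundary detail
        feats.append((low, high))
    return feats

# Toy binary silhouette: a filled square on a black background
sil = np.zeros((32, 32))
sil[8:24, 8:24] = 1.0
feats = band_features(sil)
```

    By construction each low/high pair sums back to the input, so the three cut-offs give complementary coarse-to-fine views of the same silhouette.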

    Leveraging Image Analysis for High-Throughput Plant Phenotyping

    The complex interaction between a genotype and its environment controls the biophysical properties of a plant, manifested in observable traits, i.e., the plant's phenome, which influences resource acquisition, performance, and yield. High-throughput automated image-based plant phenotyping refers to sensing and quantifying plant traits non-destructively by analyzing images captured at regular intervals and with precision. While phenomic research has drawn significant attention in the last decade, extracting meaningful and reliable numerical phenotypes from plant images, especially by considering a plant's individual components, e.g., leaves, stem, fruit, and flower, remains a critical bottleneck to the translation of advances in phenotyping technology into genetic insights due to various challenges including lighting variations, plant rotations, and self-occlusions. The paper provides (1) a framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system; (2) a taxonomy of phenotypes that may be derived by image analysis for better understanding of morphological structure and functional processes in plants; (3) a brief discussion on publicly available datasets to encourage algorithm development and uniform comparison with state-of-the-art methods; (4) an overview of the state-of-the-art image-based high-throughput plant phenotyping methods; and (5) open problems for the advancement of this research field.
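    As a flavour of what "quantifying plant traits from images" means in practice, a few holistic phenotypes can be read straight off a binary plant mask. This is a minimal illustrative sketch, not the paper's taxonomy; real pipelines also calibrate pixels to physical units per camera and zoom level:

```python
import numpy as np

def holistic_phenotypes(mask):
    """Simple holistic phenotypes from a binary plant mask.

    Returns projected area (foreground pixel count), plant height and
    width (bounding-box extents), and the height/width aspect ratio.
    """
    ys, xs = np.nonzero(mask)
    area = len(ys)
    height = int(ys.max() - ys.min() + 1)
    width = int(xs.max() - xs.min() + 1)
    return {"area_px": area, "height_px": height,
            "width_px": width, "aspect": height / width}

# Toy mask: a 30 x 10 pixel "plant"
mask = np.zeros((40, 40), dtype=bool)
mask[5:35, 10:20] = True
ph = holistic_phenotypes(mask)
```

    Component phenotypes (per-leaf area, stem height) require the much harder step of separating the plant into parts, which is the bottleneck the survey discusses.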
    Increasing Predictive Ability by Modeling Interactions between Environments, Genotype and Canopy Coverage Image Data for Soybeans

    Phenomics is a new area that offers numerous opportunities for application in plant breeding. One possibility is to exploit information obtained from early stages of the growing season by combining it with genomic data. This opens an avenue that can be capitalized on by improving the predictive ability of the common models used for genomic prediction. Imagery (canopy coverage) data recorded between days 14 and 71 using two collection methods (ground information in 2013 and 2014; aerial information in 2014 and 2015) on a soybean nested association mapping population (SoyNAM) were used to calibrate the prediction models together with several types of interactions between canopy coverage data, environments, and genomic data. Three different scenarios that breeders might face when testing lines in fields were considered: (i) incomplete field trials (CV2); (ii) newly developed lines (CV1); and (iii) predicting lines in unobserved environments (CV0). Two traits were evaluated in this study: yield and days to maturity (DTM). Results showed improvements in the predictive ability for yield with respect to models that solely included genomic data. These relative improvements ranged 27-123%, 27-148%, and 65-165% for CV2, CV1, and CV0, respectively. No major changes were observed for DTM. Similar improvements were observed for both traits when the reduced canopy information for days 14-33 was used to build the training-testing relationships, showing a clear advantage of using phenomics in the very early stages of the growing season.
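    The modelling idea of augmenting genomic predictors with canopy data and interaction terms can be sketched with a toy ridge regression. This is a deliberately simplified stand-in for the paper's genomic prediction models (all data, names, and the interaction construction here are assumptions for illustration):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def with_interactions(G, C):
    """Stack genomic features G, canopy features C, and all pairwise
    G x C products, a toy analogue of genotype-by-canopy interactions."""
    inter = np.einsum('ni,nj->nij', G, C).reshape(len(G), -1)
    return np.hstack([G, C, inter])

rng = np.random.default_rng(1)
G = rng.standard_normal((100, 5))   # toy genomic markers
C = rng.standard_normal((100, 3))   # toy early-season canopy coverage
# Simulated trait with a genuine genotype x canopy interaction term
y = G[:, 0] + G[:, 1] * C[:, 0] + 0.1 * rng.standard_normal(100)

X = with_interactions(G, C)
w = ridge_fit(X, y, lam=0.1)
pred = X @ w
```

    A model fed only `G` cannot capture the `G[:, 1] * C[:, 0]` term, which is the intuition behind the reported gains from adding canopy-by-genotype interactions.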

    OSC-CO2: coattention and cosegmentation framework for plant state change with multiple features

    Cosegmentation and coattention are extensions of traditional segmentation methods aimed at detecting a common object (or objects) in a group of images. Current cosegmentation and coattention methods are ineffective for objects, such as plants, that change their morphological state while being captured in different modalities and views. The Object State Change using Coattention-Cosegmentation (OSC-CO2) is an end-to-end unsupervised deep-learning framework that enhances traditional segmentation techniques by processing, analyzing, selecting, and combining suitable segmentation results that may contain most of our target object's pixels, and then displaying a final segmented image. The framework leverages coattention-based convolutional neural networks (CNNs) and cosegmentation-based dense Conditional Random Fields (CRFs) to address segmentation accuracy in high-dimensional plant imagery with evolving plant objects. The efficacy of OSC-CO2 is demonstrated using plant growth sequences imaged with infrared, visible, and fluorescence cameras in multiple views using a remote sensing, high-throughput phenotyping platform, and is evaluated using Jaccard index and precision measures. We also introduce CosegPP+, a dataset that is structured to provide quantitative information on the efficacy of our framework. Results show that OSC-CO2 outperformed state-of-the-art segmentation and cosegmentation methods by improving segmentation accuracy by 3% to 45%.
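    The two evaluation measures named in the abstract are straightforward to compute from binary masks. A minimal sketch with toy masks (function names ours):

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index: |intersection| / |union| of two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def precision(pred, truth):
    """Precision: fraction of predicted foreground that is correct."""
    p = pred.sum()
    return np.logical_and(pred, truth).sum() / p if p else 1.0

# Toy masks: prediction covers two thirds of the true plant, no false positives
truth = np.zeros((10, 10), dtype=bool); truth[2:8, 2:8] = True  # 36 px
pred = np.zeros((10, 10), dtype=bool);  pred[2:8, 2:6] = True   # 24 px
j = jaccard(pred, truth)
p = precision(pred, truth)
```

    Reporting both matters: a conservative segmenter can score perfect precision while a low Jaccard index exposes the pixels it missed, as in this toy case.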

    Multi-feature data repository development and analytics for image cosegmentation in high-throughput plant phenotyping

    Cosegmentation is a newly emerging computer vision technique used to segment an object from the background by processing multiple images at the same time. Traditional plant phenotyping analysis uses thresholding segmentation methods which result in high segmentation accuracy. Although there are proposed machine learning and deep learning algorithms for plant segmentation, their predictions rely on the specific features being present in the training set. The need for a multi-featured dataset and analytics for cosegmentation becomes critical to better understand and predict plants' responses to the environment. High-throughput phenotyping produces an abundance of data that can be leveraged to improve segmentation accuracy and plant phenotyping. This paper introduces four datasets consisting of two plant species, buckwheat and sunflower, each split into control and drought conditions. Each dataset has three modalities (fluorescence, infrared, and visible) with 7 to 14 temporal images collected in a high-throughput facility at the University of Nebraska-Lincoln. The four datasets (collected under the CosegPP data repository in this paper) are evaluated using three cosegmentation algorithms: Markov random fields-based, clustering-based, and deep learning-based cosegmentation, and one commonly used segmentation approach in plant phenotyping. The integration of CosegPP with advanced cosegmentation methods will be the latest benchmark in comparing segmentation accuracy and finding areas of improvement for cosegmentation methodology.

    Leveraging Image Analysis to Compute 3D Plant Phenotypes Based on Voxel-Grid Plant Reconstruction

    High-throughput image-based plant phenotyping facilitates the extraction of morphological and biophysical traits of a large number of plants non-invasively in a relatively short time. It facilitates the computation of advanced phenotypes by considering the plant as a single object (holistic phenotypes) or its components, i.e., leaves and the stem (component phenotypes). The architectural complexity of plants increases over time due to variations in self-occlusions and phyllotaxy, i.e., the arrangement of leaves around the stem. One of the central challenges to computing phenotypes from 2-dimensional (2D) single-view images of plants, especially at the advanced vegetative stage in the presence of self-occluding leaves, is that the information captured in 2D images is incomplete, and hence the computed phenotypes are inaccurate. We introduce a novel algorithm to compute 3-dimensional (3D) plant phenotypes from multiview images using voxel-grid reconstruction of the plant (3DPhenoMV). The paper also presents a novel method to reliably detect and separate the individual leaves and the stem from the 3D voxel-grid of the plant using a voxel overlapping consistency check and point cloud clustering techniques. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln 3D Plant Phenotyping Dataset (UNL-3DPPD). A generic taxonomy of 3D image-based plant phenotypes is also presented to promote 3D plant phenotyping research. A subset of these phenotypes is computed using computer vision algorithms with a discussion of their significance in the context of plant science. The central contributions of the paper are (a) an algorithm for 3D voxel-grid reconstruction of maize plants at the advanced vegetative stages using images from multiple 2D views; (b) a generic taxonomy of 3D image-based plant phenotypes and a public benchmark dataset, i.e., UNL-3DPPD, to promote the development of 3D image-based plant phenotyping research; and (c) novel voxel overlapping consistency check and point cloud clustering techniques to detect and isolate individual leaves and stem of the maize plants to compute the component phenotypes. Detailed experimental analyses demonstrate the efficacy of the proposed method, and also show the potential of 3D phenotypes to explain the morphological characteristics of plants regulated by genetic and environmental interactions.
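    The core of voxel-grid reconstruction from multiple views is silhouette carving: a voxel survives only if it projects inside the silhouette in every view. The sketch below uses three idealized orthographic axis-aligned views for simplicity; 3DPhenoMV itself works from calibrated camera projections, so treat this purely as a toy illustration:

```python
import numpy as np

def carve_voxels(masks, n=32):
    """Toy voxel-grid reconstruction by silhouette carving.

    masks: dict of binary silhouettes {'xy', 'xz', 'yz'}, each (n, n),
    seen along the z-, y-, and x-axes respectively. A voxel is kept
    only if it falls inside all three silhouettes.
    """
    vol = np.ones((n, n, n), dtype=bool)
    vol &= masks['xy'][:, :, None]   # constraint from the z-axis view
    vol &= masks['xz'][:, None, :]   # constraint from the y-axis view
    vol &= masks['yz'][None, :, :]   # constraint from the x-axis view
    return vol

# Toy case: the same 16 x 16 square in all three views carves a 16^3 cube
n = 32
sq = np.zeros((n, n), dtype=bool)
sq[8:24, 8:24] = True
vol = carve_voxels({'xy': sq, 'xz': sq, 'yz': sq}, n)
```

    Carving yields the visual hull, an over-approximation of the plant; concavities invisible in every silhouette are what the paper's later consistency checks and clustering steps must contend with.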

    A Segmentation-Guided Deep Learning Framework for Leaf Counting

    Deep learning-based methods have recently provided a means to rapidly and effectively extract various plant traits due to their powerful ability to depict a plant image across a variety of species and growth conditions. In this study, we focus on two fundamental tasks in plant phenotyping, i.e., plant segmentation and leaf counting, and propose a two-stream deep learning framework for segmenting plants and counting leaves of various sizes and shapes from two-dimensional plant images. In the first stream, a multi-scale segmentation model using a spatial pyramid is developed to extract leaves of different sizes and shapes, where the fine-grained details of leaves are captured using a deep feature extractor. In the second stream, a regression counting model is proposed to estimate the number of leaves without any pre-detection, where an auxiliary binary mask from the segmentation stream is introduced to enhance the counting performance by effectively alleviating the influence of complex backgrounds. Extensive pot experiments are conducted on the CVPPP 2017 Leaf Counting Challenge dataset, which contains images of Arabidopsis and tobacco plants. The experimental results demonstrate that the proposed framework achieves promising performance in both plant segmentation and leaf counting, providing a reference for the automatic analysis of plant phenotypes.
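    The role of the auxiliary binary mask, suppressing background activations before the counting regressor sees them, can be illustrated with a tiny mask-gated pooling step. This NumPy sketch is a minimal analogue of the idea, not the paper's network architecture:

```python
import numpy as np

def mask_guided_pool(features, mask):
    """Gate a feature map with a binary plant mask, then average-pool.

    features: (C, H, W) feature map; mask: (H, W) binary segmentation.
    Background activations are zeroed, and the mean is taken over
    plant pixels only, so background clutter cannot bias the pooled
    descriptor fed to a downstream counting regressor.
    """
    gated = features * mask[None, :, :]        # background -> 0
    denom = max(mask.sum(), 1)                 # avoid division by zero
    return gated.sum(axis=(1, 2)) / denom      # mean over plant pixels

# Toy feature map with strong "clutter" on the left half
feat = np.ones((4, 8, 8))
feat[:, :, :4] = 5.0                           # background clutter
mask = np.zeros((8, 8))
mask[:, 4:] = 1.0                              # plant on the right half
v = mask_guided_pool(feat, mask)
```

    Without the mask the pooled value would be dominated by the clutter; with it, each channel reflects only the plant region, which is the intuition behind feeding the segmentation stream's mask into the counting stream.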